Design and engineering of a simplified workflow execution for the MG5aMC event generator on GPUs and vector CPUs
Physics event generators are essential components of the data analysis
software chain of high energy physics experiments, and important consumers of
their CPU resources. Improving the software performance of these packages on
modern hardware architectures, such as those deployed at HPC centers, is
essential in view of the upcoming HL-LHC physics programme. In this paper, we
describe an ongoing activity to reengineer the Madgraph5_aMC@NLO physics event
generator, primarily to port it and allow its efficient execution on GPUs, but
also to modernize it and optimize its performance on vector CPUs. We describe
the motivation, engineering process and software architecture design of our
developments, as well as the current challenges and future directions for this
project. This paper is based on our submission to vCHEP2021 in March 2021,
complemented with a few preliminary results that we presented during the
conference. Further details and updated results will be given in later
publications.
Comment: 17 pages, 6 figures, submitted to vCHEP2021 proceedings in EPJ Web of Conferences; minor changes to address comments from the EPJWOC reviewers
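The key observation behind such a port is that a leading-order matrix element is the same arithmetic applied independently to every event, so events map naturally onto GPU threads or SIMD lanes. A minimal sketch of this event-level data parallelism, using NumPy lockstep arrays and a purely hypothetical toy "matrix element" (this is an illustration of the parallelization pattern, not MG5aMC code):

```python
import numpy as np

def matrix_element_batch(momenta):
    """Toy stand-in for a matrix-element calculation.

    The same arithmetic is applied to every event, which is what makes
    event-level data parallelism (one GPU thread or SIMD lane per event)
    a natural fit. momenta: (n_events, 4) array of (E, px, py, pz).
    """
    # A hypothetical smooth function of the invariant mass squared.
    s = momenta[:, 0]**2 - np.sum(momenta[:, 1:]**2, axis=1)
    return 1.0 / (1.0 + s**2)

rng = np.random.default_rng(seed=1)
events = rng.normal(size=(1024, 4))
weights = matrix_element_batch(events)  # all 1024 events evaluated in lockstep
```

On a GPU, each thread would evaluate one row; on a vector CPU, the compiler maps contiguous rows onto SIMD registers, which is why the same restructuring benefits both targets.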
Software Challenges For HL-LHC Data Analysis
The high energy physics community is discussing where investment is needed to
prepare software for the HL-LHC and its unprecedented challenges. The ROOT
project has been one of the central software players in high energy physics
for decades. From its experience and expectations, the ROOT team has distilled a
comprehensive set of areas that should see research and development in the
context of data analysis software, for making best use of HL-LHC's physics
potential. This work shows what these areas could be, why the ROOT team
believes investing in them is needed, which gains are expected, and where
related work is ongoing. It can serve as a guide for future research
proposals and collaborations.
Search for the Decay of the Standard Model Higgs Boson in Associated Production with Vector Bosons Using ATLAS Data at √s = 8 TeV
In this thesis, a search for the decay of the Standard-Model Higgs boson is presented. A dataset recorded by the ATLAS detector at CERN is searched for events where Higgs bosons are produced in association with a vector boson that decays to charged leptons, at a centre-of-mass energy of √s = 8 TeV. Machine learning with boosted decision trees is used to implement the most sensitive search on LHC data with charged leptons, the ATLAS search published in 2015. It is further shown that the machine-learning part of this search can be improved by training boosted decision trees with Lorentz-invariant observables, which increases the sensitivity by 10 % and reduces systematic uncertainties by 16 %. For a Higgs boson of , a local excess over the background prediction at 99.2 % confidence level is observed, which corresponds to a discovery significance of , whereas were expected from simulations. The ratio of the observed signal strength to the Standard-Model prediction is
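Lorentz-invariant BDT inputs of the kind described can be built from sums of four-vectors, since the Minkowski norm is unchanged under boosts. A minimal sketch with a hypothetical feature set (the actual observables used in the thesis are not reproduced here):

```python
import numpy as np

def minkowski_mass2(p):
    """Invariant mass squared of a four-vector (E, px, py, pz) --
    unchanged under Lorentz boosts, unlike e.g. pT or rapidity."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def invariant_features(lep1, lep2, jet1, jet2):
    # Hypothetical feature set: pairwise invariant masses built from
    # the lepton and jet four-vectors of a candidate event.
    dilepton = lep1 + lep2
    dijet = jet1 + jet2
    return [minkowski_mass2(dilepton),
            minkowski_mass2(dijet),
            minkowski_mass2(dilepton + dijet)]
```

Because each feature is frame-independent, the trained classifier cannot pick up on boost-dependent detector effects, which is one plausible mechanism for the reduced systematic uncertainties reported above.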
Measurement of Cross Sections and Properties of the Higgs Boson Using the ATLAS Detector
This talk presents an overview of Higgs measurements on Run 2 data from 2015 and 2016. Measurements of fiducial and differential Higgs cross sections in decays to bosons, as well as updates on searches for fermionic Higgs decays and ttH production, are discussed.
What the new RooFit can do for your analysis
RooFit is a toolkit for statistical modelling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics. For the past year, RooFit has been undergoing modernisation. In this talk, improvements already released with ROOT are discussed, such as faster data loading, vectorised computations and more standard-like interfaces. These speed up unbinned fits by several factors and make RooFit easier to use from both C++ and Python.
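The gain from vectorised computation can be illustrated on an unbinned negative log-likelihood: the sketch below contrasts a per-event loop with a whole-dataset array evaluation. NumPy stands in for RooFit's internal batch computations here; this is not RooFit code.

```python
import math
import numpy as np

def nll_loop(data, mu, sigma):
    # One pdf call per event: simple, but slow in an interpreted loop
    # and hard for a compiler to vectorise.
    total = 0.0
    for x in data:
        p = math.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * math.sqrt(2 * math.pi))
        total -= math.log(p)
    return total

def nll_vectorised(data, mu, sigma):
    # Whole-dataset evaluation in a handful of array operations --
    # the shape of speed-up that batched pdf evaluation goes after.
    z = (data - mu) / sigma
    return np.sum(0.5 * z**2 + math.log(sigma * math.sqrt(2.0 * math.pi)))
```

Both functions compute the same number; the second replaces per-event Python-level work with bulk array arithmetic, which is the same restructuring that lets a fitting library use SIMD units effectively.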
GPU programming
Lecture Contents
In this lecture, we will look into the basics of GPU computing to understand in which circumstances the usage of GPUs is beneficial for scientific computing. Using Nvidia CUDA GPUs as an example, we will learn how the hardware works, which guides us towards how it has to be programmed.
Requirements
Students should have written basic C/C++ programs before, and should be familiar with pointers and arrays and/or vectors of data.
Hands on: We have the option to play with a few basic CUDA applications. There are three ways in which you can participate:
Linux / Mac laptop with ssh.
Any laptop and a browser. Please register on the current page to be granted access to GPUs on CERN's SWAN cluster. You will need a cernbox account, which is created once you visit https://cernbox.cern.ch. Note that the number of GPUs is limited.
Team programming. If the two above are not for you, we will team up and work together.
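The central CUDA programming pattern can be previewed without a GPU: each thread derives a global index from its block and thread coordinates and handles one array element (in CUDA C, `i = blockIdx.x * blockDim.x + threadIdx.x`). A pure-Python emulation of a 1D grid, sequential and for illustration only:

```python
def saxpy_kernel(block_idx, thread_idx, block_dim, a, x, y, out, n):
    # What one CUDA thread would do: compute its global index and
    # handle exactly one array element.
    i = block_idx * block_dim + thread_idx
    if i < n:                 # guard against the ragged last block
        out[i] = a * x[i] + y[i]

def launch_1d(grid_dim, block_dim, kernel, *args):
    # Sequential stand-in for what the GPU runs concurrently.
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, t, block_dim, *args)

n = 1000
x = [1.0] * n
y = [2.0] * n
out = [0.0] * n
block = 256
launch_1d((n + block - 1) // block, block, saxpy_kernel, 3.0, x, y, out, n)
# every element is 3*1 + 2 = 5
```

The `(n + block - 1) // block` rounding and the `i < n` guard are the standard idioms for array sizes that are not a multiple of the block size.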
Short Speaker Bio
Stephan obtained a PhD in particle physics, searching for decays of the Higgs boson with the ATLAS detector. Afterwards, he worked for the ROOT project at CERN, focussing on high-throughput computing and RooFit, a package for fitting and statistical analysis of data. Now, Stephan is a computing engineer in CERN IT's innovation group, focussing on GPU computing for high-energy physics.
Acceleration with GPUs and other RooFit news
RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding. Therefore, RooFit development in recent years has focused on modernizing RooFit, improving its ease of use, and on performance optimization. This paper presents the new RooFit vectorized computation mode, which supports calculations on the GPU. Additionally, we discuss new features in the upcoming ROOT 6.26 release, highlighting the new pythonizations in particular.
Faster RooFitting: Automated parallel calculation of collaborative statistical models
RooFit [1, 2] is the main statistical modeling and fitting package used to extract physical parameters from reduced particle collision data, e.g. the Higgs boson experiments at the LHC [3, 4]. RooFit aims to separate particle physics model building and fitting (the users' goals) from their technical implementation and optimization in the back-end. In this paper, we outline our efforts to further optimize this back-end by automatically running parts of user models in parallel on multi-core machines. A major challenge is that RooFit allows users to define many different types of models, with different types of computational bottlenecks. Our automatic parallelization framework must therefore be flexible, while still reducing run time by at least an order of magnitude, preferably more. We have performed extensive benchmarks and identified at least three bottlenecks that will benefit from parallelization. We designed a parallelization framework that allows us to parallelize likelihood minimization with high performance by splitting over partial derivatives in the minimizer. The basis of the framework is a task queue approach. Preliminary results show speed-ups of factors of 2 to 20, depending on the exact model and parallelization strategy.
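The idea of splitting likelihood minimization over partial derivatives can be sketched with a worker pool acting as the task queue: each parameter's finite-difference derivative is an independent task. The likelihood below is a hypothetical stand-in for an expensive model evaluation, and a thread pool replaces the multi-process workers of the real framework.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def nll(params):
    # Hypothetical stand-in for an expensive likelihood evaluation.
    mu, sigma = params
    data = [0.1, -0.4, 0.3, 0.8]
    return sum(0.5 * ((x - mu) / sigma)**2 + math.log(sigma) for x in data)

def partial_derivative(args):
    # One task per parameter: central finite difference of the NLL.
    params, k, h = args
    up = list(params); up[k] += h
    dn = list(params); dn[k] -= h
    return (nll(up) - nll(dn)) / (2 * h)

def gradient_parallel(params, h=1e-6):
    # Each partial derivative is queued as an independent task;
    # the pool's internal work queue plays the role of the task queue.
    tasks = [(params, k, h) for k in range(len(params))]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(partial_derivative, tasks))
```

With n parameters the gradient needs roughly 2n likelihood evaluations per minimizer step, all mutually independent, which is what makes this splitting scale well on multi-core machines.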